Bayesian optimizer
Export Reviews, Discussions, Author Feedback and Meta-Reviews
They make two contributions to existing AutoML methods: a meta-learning component, which uses a list of past datasets to warm-start the Bayesian optimizer, and an ensemble-construction component, which reuses the ranking established by the Bayesian optimizer to combine the best methods instead of using only the single best one, for greater robustness. They compare their system, auto-sklearn, to an existing AutoML system, Auto-WEKA, and find that it outperforms Auto-WEKA in a majority of cases. They also compare variants of their system without (some of) the two novel components (meta-learning and ensemble construction) and show that the meta-learning component is the most helpful.
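The ensemble-construction step described above is often implemented as greedy forward selection with replacement. The sketch below illustrates that idea in a simplified form; the function name, the use of mean squared error, and the fixed number of rounds are illustrative assumptions, not auto-sklearn's actual implementation.

```python
import statistics

def greedy_ensemble(predictions, y_true, n_rounds=5):
    """Greedy forward ensemble selection (with replacement), a simplified
    sketch of the ensemble-construction idea. `predictions` maps model
    names to their predicted values on a validation set."""
    ensemble = []
    for _ in range(n_rounds):
        best_name, best_err = None, float("inf")
        for name in predictions:
            trial = ensemble + [name]
            # Average the predictions of the currently selected models.
            avg = [statistics.mean(predictions[m][i] for m in trial)
                   for i in range(len(y_true))]
            err = statistics.mean((a - b) ** 2 for a, b in zip(avg, y_true))
            if err < best_err:
                best_name, best_err = name, err
        ensemble.append(best_name)  # a model may be selected repeatedly
    return ensemble
```

Because selection is with replacement, a strong model can appear several times, which effectively weights it more heavily in the averaged ensemble.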
Optimizing Collaborative Robotics since Pre-Deployment via Cyber-Physical Systems' Digital Twins
Cella, Christian, Faroni, Marco, Zanchettin, Andrea, Rocco, Paolo
The collaboration between humans and robots requires a paradigm shift not only in robot perception, reasoning, and action, but also in the design of the robotic cell. This paper proposes an optimization framework for designing collaborative robotics cells using a digital twin during the pre-deployment phase. This approach mitigates the limitations of experience-based sub-optimal designs by means of Bayesian optimization to find the optimal layout after a certain number of iterations. By integrating production KPIs into a black-box optimization framework, the digital twin supports data-driven decision-making, reduces the need for costly prototypes, and ensures continuous improvement thanks to the learning nature of the algorithm. The paper presents a case study with preliminary results that show how this methodology can be applied to obtain safer, more efficient, and adaptable human-robot collaborative environments.
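The closed loop described in this abstract — propose a layout, evaluate a production KPI on the digital twin, update the model — can be sketched as below. Everything here is an illustrative assumption: the single layout parameter, the toy KPI function, and the simple kernel-weighted surrogate with a distance-based exploration bonus standing in for a full Gaussian-process Bayesian optimizer.

```python
import math
import random

def simulate_kpi(x):
    # Hypothetical digital-twin evaluation: a KPI (to maximize) as a
    # function of one layout parameter, e.g. robot base position.
    return -(x - 0.7) ** 2  # optimum at x = 0.7 by construction

def bo_layout(n_iter=20, seed=0):
    """Minimal propose-evaluate-update loop in the spirit of Bayesian
    optimization. A kernel-weighted mean plus a nearest-point distance
    bonus replaces the usual GP posterior and acquisition function."""
    rng = random.Random(seed)
    X, Y = [], []
    for _ in range(n_iter):
        candidates = [rng.random() for _ in range(100)]
        if not X:
            x = candidates[0]  # no data yet: take any candidate
        else:
            def score(c):
                w = [math.exp(-((c - xi) / 0.1) ** 2) for xi in X]
                mu = sum(wi * yi for wi, yi in zip(w, Y)) / (sum(w) + 1e-9)
                bonus = min(abs(c - xi) for xi in X)  # exploration term
                return mu + 2.0 * bonus
            x = max(candidates, key=score)
        X.append(x)
        Y.append(simulate_kpi(x))  # "run" the digital twin
    return max(zip(Y, X))[1]  # layout with the best observed KPI
```

Early iterations are dominated by the exploration bonus (filling gaps in the layout space); later ones exploit the surrogate mean near the best region, which mirrors how the number of costly twin evaluations stays bounded.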
Bespoke Nanoparticle Synthesis and Chemical Knowledge Discovery Via Autonomous Experimentations
Yoo, Hyuk Jun, Kim, Nayeon, Lee, Heeseung, Kim, Daeho, Ow, Leslie Tiong Ching, Nam, Hyobin, Kim, Chansoo, Lee, Seung Yong, Lee, Kwan-Young, Kim, Donghun, Han, Sang Soo
The optimization of nanomaterial synthesis using numerous synthetic variables is an extremely laborious task because conventional combinatorial explorations are prohibitively expensive. In this work, we report an autonomous experimentation platform developed for the bespoke design of nanoparticles (NPs) with targeted optical properties. This platform operates in a closed-loop manner between a batch synthesis module of NPs and a UV-Vis spectroscopy module, based on the feedback of the AI optimization modeling. With silver (Ag) NPs as a representative example, we demonstrate that the Bayesian optimizer implemented with the early stopping criterion can efficiently produce Ag NPs that precisely possess the desired absorption spectra within only 200 iterations (when optimizing among five synthetic reagents). In addition to the outstanding material-development efficiency, the analysis of synthetic variables further reveals a novel chemistry involving the effects of citrate in Ag NP synthesis. The amount of citrate is key to controlling the competition between spherical and plate-shaped NPs and, as a result, affects the shapes of the absorption spectra as well. Our study highlights the capabilities of the platform both to enhance search efficiency and to provide novel chemical knowledge by analyzing datasets accumulated from the autonomous experimentations.
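The closed loop with an early-stopping criterion described above can be sketched as follows. The single synthetic variable, the target value, the quadratic "spectral loss", and the use of random search in place of the actual Bayesian optimizer are all illustrative assumptions.

```python
import random

def spectrum_loss(param, target=0.42):
    # Hypothetical stand-in for the mismatch between a measured UV-Vis
    # spectrum and the target spectrum, as a function of one synthetic
    # variable (e.g. a reagent ratio). `target` is an assumed optimum.
    return (param - target) ** 2

def closed_loop(max_iter=200, tol=1e-4, seed=1):
    """Sketch of a propose -> synthesize/measure -> evaluate loop with
    early stopping: terminate as soon as the spectral match is good
    enough, rather than exhausting the iteration budget."""
    rng = random.Random(seed)
    best_p, best_loss = None, float("inf")
    for i in range(1, max_iter + 1):
        p = rng.random()              # propose a recipe (random search here)
        loss = spectrum_loss(p)       # "measure" the resulting spectrum
        if loss < best_loss:
            best_p, best_loss = p, loss
        if best_loss < tol:           # early stopping: target reached
            return best_p, i
    return best_p, max_iter
```

The early-stopping check is what keeps the number of physical syntheses bounded: each iteration here corresponds to an actual batch synthesis plus a spectroscopy measurement, so stopping at the first sufficiently good match saves real reagents and time.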
A Programmable Approach to Model Compression
Joseph, Vinu, Muralidharan, Saurav, Garg, Animesh, Garland, Michael, Gopalakrishnan, Ganesh
Deep neural networks frequently contain far more weights, represented at a higher precision, than are required for the specific task which they are trained to perform. Consequently, they can often be compressed using techniques such as weight pruning and quantization that reduce both model size and inference time without appreciable loss in accuracy. Compressing models before they are deployed can therefore result in significantly more efficient systems. However, while the results are desirable, finding the best compression strategy for a given neural network, target platform, and optimization objective often requires extensive experimentation. Moreover, finding optimal hyperparameters for a given compression strategy typically results in even more expensive, frequently manual, trial-and-error exploration. In this paper, we introduce a programmable system for model compression called Condensa. Users programmatically compose simple operators, in Python, to build complex compression strategies. Given a strategy and a user-provided objective, such as minimization of running time, Condensa uses a novel sample-efficient constrained Bayesian optimization algorithm to automatically infer desirable sparsity ratios. Our experiments on three real-world image classification and language modeling tasks demonstrate memory footprint reductions of up to 65x and runtime throughput improvements of up to 2.22x using at most 10 samples per search. We have released a reference implementation of Condensa at https://github.com/NVlabs/condensa.
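The abstract's central idea — compression strategies composed programmatically from simple operators — can be illustrated with the sketch below. The operator names, their signatures, and the plain-list representation of weights are assumptions for illustration; Condensa's real API operates on deep-learning models and differs in detail.

```python
def prune(weights, sparsity):
    """Hypothetical magnitude-pruning operator: zero out the smallest
    fraction `sparsity` of the weights."""
    k = int(len(weights) * sparsity)
    cutoff = sorted(abs(w) for w in weights)[k] if k else 0.0
    return [0.0 if abs(w) < cutoff else w for w in weights]

def quantize(weights, step=0.25):
    """Hypothetical uniform-quantization operator: snap each weight to
    the nearest multiple of `step`."""
    return [round(w / step) * step for w in weights]

def compose(*ops):
    """Build a compression strategy as a composition of operators,
    applied left to right, in the spirit of the programmable approach."""
    def strategy(weights):
        for op in ops:
            weights = op(weights)
        return weights
    return strategy

# A strategy that prunes 50% of weights, then quantizes the survivors.
strategy = compose(lambda w: prune(w, 0.5), quantize)
```

The sparsity ratio passed to `prune` is exactly the kind of hyperparameter the paper's constrained Bayesian optimization would infer automatically, given an objective such as minimizing running time subject to an accuracy constraint.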